1.
Every year, the rate at which technology is applied to areas of our everyday life increases at a steady pace. This rapid development drives technology companies to design and fabricate their integrated circuits (ICs) in non-trustworthy outsourcing foundries to reduce cost, leaving room for a hardware form of virus, known as a Hardware Trojan (HT), to be developed. HTs leak encrypted information, degrade device performance, or lead to total destruction. To reduce the risks associated with these viruses, various approaches based on conventional or machine learning methods have been developed to prevent and detect them. Ideally, any undesired modification made to an IC should be detectable by pre-silicon verification/simulation and post-silicon testing. The infected circuitry can be inserted at different stages of the manufacturing process, making HT detection a complicated procedure. In this paper, we present a comprehensive review of research dedicated to countermeasures against HTs embedded into ICs. The literature is grouped into four main categories: (a) conventional HT detection approaches, (b) machine learning for HT countermeasures, (c) design for security, and (d) runtime monitoring.
2.
We present a data-driven method for monitoring machine status in manufacturing processes. Audio and vibration data from precision machining are used for inference in two operating scenarios: (a) variable machine health states (anomaly detection); and (b) settings of machine operation (state estimation). Audio and vibration signals are first processed through the Fast Fourier Transform and Principal Component Analysis to extract transformed, informative features. These features are then used to train classification and regression models for machine state monitoring. Specifically, three classifiers (K-nearest neighbors, convolutional neural networks, and support vector machines) and two regressors (support vector regression and neural network regression) were explored in terms of their accuracy in machine state prediction. It is shown that the audio and vibration signals are sufficiently rich in information about the machine that 100% state classification accuracy can be accomplished. Data fusion was also explored, showing the overall superior accuracy of data-driven regression models.
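To make the pipeline concrete, here is a minimal sketch of the FFT + PCA + K-nearest-neighbors chain the abstract describes, implemented with scikit-learn; the synthetic signals, window length, component count, and neighbor count are illustrative assumptions, not the paper's data or settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical windowed sensor signals: (n_windows, samples_per_window)
rng = np.random.default_rng(0)
audio = rng.normal(size=(200, 1024))
labels = rng.integers(0, 2, size=200)          # e.g., healthy vs. anomalous

# FFT magnitude spectra as raw features, then PCA + KNN for state classification
spectra = np.abs(np.fft.rfft(audio, axis=1))
model = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
model.fit(spectra[:150], labels[:150])
print("test accuracy:", model.score(spectra[150:], labels[150:]))
```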
3.
Information granules, such as fuzzy sets, capture essential knowledge about data and the key dependencies between them. Quite commonly, we may envision that information granules (fuzzy sets) are the result of fuzzy clustering and can therefore be succinctly represented as fuzzy partition matrices. Interestingly, the same data set can be represented from various standpoints, and this multifaceted view yields a collection of different partition matrices reflecting higher-order granular knowledge about the data. The levels of specificity of the clusters into which the data are organized can differ considerably: the larger the number of clusters, the more detailed the insight into the structure of the data. Given the granularity of the resulting constructs (rather than the plain data themselves), one can view a collection of partition matrices as a certain type of knowledge network. Considering the variety of sources of knowledge encountered across the network, we are interested in forming a consensus among them. In a nutshell, this leads to the construction of fuzzy partition matrices which "reconcile" the knowledge captured by the individual partition matrices. Given that the granularity of the sources of knowledge under consideration can vary quite substantially, we develop a unified optimization perspective by introducing fuzzy proximity matrices that are induced by the corresponding partition matrices. In the sequel, the optimization is realized on the basis of these proximity matrices. We offer a detailed algorithm and illustrate its performance using a series of numeric experiments.
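Because partition matrices obtained at different granularities have different numbers of rows, the reconciliation is carried out on induced proximity matrices, which are always of size n x n. The sketch below uses one common way of inducing proximity from a fuzzy partition matrix (summing the element-wise minima of membership grades); the paper's exact formula may differ, and the tiny matrices are made-up examples.

```python
import numpy as np

def induced_proximity(U):
    """Proximity matrix induced by a fuzzy partition matrix U (clusters x data).
    One common choice: prox[i, j] = sum_k min(U[k, i], U[k, j])."""
    _, n = U.shape
    prox = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            prox[i, j] = np.minimum(U[:, i], U[:, j]).sum()
    return prox

# Two partition matrices of the same three data points at different granularities
U1 = np.array([[0.8, 0.2, 0.1],
               [0.2, 0.8, 0.9]])                       # 2 clusters
U2 = np.array([[0.7, 0.1, 0.1],
               [0.2, 0.8, 0.2],
               [0.1, 0.1, 0.7]])                       # 3 clusters
# Both proximity matrices are 3 x 3, so they can be compared or reconciled
print(induced_proximity(U1))
print(induced_proximity(U2))
```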
4.
Computers & Education, 2007, 49(3): 691–707
In recent years, e-learning systems have become increasingly popular, and many adaptive learning environments have been proposed to offer learners customized courses in accordance with their aptitudes and learning results. To achieve adaptive learning, a predefined concept map of a course is often used to provide adaptive learning guidance for learners. However, creating the concept map of a course is difficult and time-consuming, so how to construct it automatically becomes an interesting issue. In this paper, we propose a Two-Phase Concept Map Construction (TP-CMC) approach to automatically construct the concept map from learners' historical testing records. Phase 1 preprocesses the testing records: it transforms the numeric grade data, refines the testing records, and mines association rules from the input data. Phase 2 transforms the mined association rules into prerequisite relationships among learning concepts to create the concept map. Accordingly, in Phase 1, we apply Fuzzy Set Theory to transform learners' numeric testing records into symbolic data, apply Education Theory to further refine them, and apply a Data Mining approach to find grade fuzzy association rules. Then, in Phase 2, based on our observations of real learning situations, we use multiple rule types to further analyze the mined rules and propose a heuristic algorithm to automatically construct the concept map. The redundancy and circularity of the constructed concept map are also discussed. Moreover, we develop a prototype system of TP-CMC and use the real testing records of junior high school students to evaluate the results. The experimental results show that our proposed approach is workable.
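As a concrete illustration of the Phase 1 preprocessing, the sketch below fuzzifies numeric test scores into linguistic grades with triangular membership functions; the breakpoints and the quiz scores are hypothetical and not taken from the paper.

```python
# Fuzzify numeric quiz scores into linguistic grades, the kind of
# preprocessing Phase 1 performs before mining fuzzy association rules.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(score):
    # Breakpoints are hypothetical, not the paper's calibration
    return {
        "low":    tri(score, -1, 0, 60),
        "medium": tri(score, 40, 60, 85),
        "high":   tri(score, 70, 100, 101),
    }

records = {"Q1": 55, "Q2": 90}      # hypothetical item scores for one learner
print({item: fuzzify(score) for item, score in records.items()})
```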
5.
6.
The spatio-temporal features exhibited by abnormal events in video are strongly correlated. To address the problem that this spatio-temporal correlation affects detection performance, a video anomaly detection method based on spatio-temporal fusion graph network learning is proposed. From the features of the video segments, the method builds a spatial similarity graph and a temporal continuity graph, with each segment corresponding to a node. The edge weights of the spatial similarity graph are formed dynamically from the Top-k similarity between each node's features and those of the other nodes; the edge weights of the temporal continuity graph are formed from the continuity of each node within m time segments. The spatial similarity graph and the temporal continuity graph are fused with adaptive weights into a spatio-temporal fusion graph convolutional network, which learns to generate video features. A graph sparsity constraint is added to the ranking loss to reduce the over-smoothing effect of the graph model and improve detection performance. Experiments were conducted on the UCF-Crime and ShanghaiTech video anomaly datasets, using the receiver operating characteristic (ROC) curve and the area under the curve (AUC) as performance metrics. On the UCF-Crime dataset, the proposed method reaches an AUC of 80.76%, 5.35% higher than the baseline; on the ShanghaiTech dataset, it reaches an AUC of 89.88%, 5.44% higher than the best comparable method. The experimental results show that the proposed method can effectively improve the performance of video anomaly event detection.
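The sketch below illustrates, under simplifying assumptions, how the two adjacency matrices could be built and fused: a spatial graph from Top-k cosine similarity between segment features and a temporal graph connecting segments within m steps. The feature dimensions, k, m, and the fixed fusion weight alpha are placeholders; in the paper the fusion weight is learned adaptively and the fused graph feeds a graph convolutional network.

```python
import numpy as np

def spatial_similarity_graph(feats, k=3):
    """Adjacency from cosine similarity, keeping each node's Top-k neighbours."""
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)                  # no self-loops
    adj = np.zeros_like(sim)
    for i, row in enumerate(sim):
        idx = np.argsort(row)[-k:]                  # indices of the k most similar nodes
        adj[i, idx] = row[idx]
    return adj

def temporal_continuity_graph(n, m=2):
    """Connect segments within m steps, with weight decaying by temporal distance."""
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - m), min(n, i + m + 1)):
            if i != j:
                adj[i, j] = 1.0 / abs(i - j)
    return adj

feats = np.random.rand(16, 128)        # 16 video segments, 128-d features
alpha = 0.6                            # fixed here; learned adaptively in the paper
fused = alpha * spatial_similarity_graph(feats) + (1 - alpha) * temporal_continuity_graph(16)
print(fused.shape)                     # (16, 16) adjacency fed to the GCN
```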
7.
In this paper, an intelligent agent (using the Fuzzy SARSA learning approach) is proposed to negotiate bilateral contracts (BC) of electrical energy in Block Forward Markets (BFM or similar market environments). In BFM energy markets, the buyers (or loads) and the sellers (or generators) submit their bids and offers on a daily basis. The loads and generators can employ intelligent software agents to trade energy in BC markets on their behalf. Since each agent attempts to choose the best bid/offer in the market, conflicts of interest may arise. In this work, the trading of energy in BC markets is modeled and solved using Game Theory and Reinforcement Learning (RL) approaches. The Stackelberg equilibrium concept is used for matchmaking between load and generator agents. Then, to overcome the limited negotiation time (a limited time is assumed to be given to each generator–load pair to negotiate and reach an agreement), a Fuzzy SARSA Learning (FSL) method is used. The fuzzy feature of FSL helps the agent cope with the continuous characteristics of the environment and also protects it from the curse of dimensionality. The performance of FSL (compared to other well-known traditional negotiation techniques, such as time-dependent and imitative techniques) is illustrated through simulation studies. The case study simulation results show that the FSL-based agent achieves more profit than agents using the other reviewed techniques in the BC energy market.
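For reference, the sketch below shows a plain tabular SARSA update with an epsilon-greedy policy; the paper's FSL agent adds a fuzzy state representation on top of this rule, and the states, actions, and hyperparameters here are illustrative only.

```python
import random
from collections import defaultdict

# Plain tabular SARSA with an epsilon-greedy policy; the paper's FSL agent
# replaces the discrete states below with a fuzzy state representation.
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = defaultdict(float)
actions = ["raise_offer", "hold", "lower_offer"]    # illustrative negotiation actions

def policy(state):
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def sarsa_step(state, action, reward, next_state):
    next_action = policy(next_state)
    td_target = reward + gamma * Q[(next_state, next_action)]
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])
    return next_action

a = policy("spread_low")                            # hypothetical market state
a = sarsa_step("spread_low", a, reward=1.0, next_state="spread_high")
```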
8.
Quality Function Deployment (QFD) is a popular planning method often used to transform customer demands/requirements into the technical characteristics of a new or improved product or service. In order to better capture (and represent) the multifarious relationships between customer requirements and technical characteristics, and the relative weights among customer requirements, this study proposes a hybrid analytic network process (ANP)-weighted fuzzy methodology. The goal is to combine the well-known capabilities of ANP and fuzzy logic to better rank the technical characteristics of a product (or a service) while implementing QFD. To demonstrate the viability of the proposed methodology, a real-world scenario is developed in which a new piece of equipment squeezes polyethylene pipes to stop the gas flow without damaging them. The ranking of the product's technical characteristics is calculated using both crisp and fuzzy weights for illustration and comparison purposes.
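The core QFD ranking step is a weighted sum of relationship strengths over customer requirements. The sketch below shows that step with crisp, made-up weights and a made-up relationship matrix; the paper derives the weights from ANP and uses fuzzy rather than crisp judgments.

```python
# Weighted-sum ranking of technical characteristics (TCs) against customer
# requirements (CRs). Numbers are made up; the paper obtains the weights via
# ANP and uses fuzzy rather than crisp relationship judgments.
cr_weights = [0.5, 0.3, 0.2]        # CR weights (sum to 1)
relationship = [                    # rows: CRs, columns: TCs (9/3/1 strength scale)
    [9, 3, 1],
    [1, 9, 3],
    [3, 1, 9],
]

scores = [
    sum(w * relationship[i][j] for i, w in enumerate(cr_weights))
    for j in range(len(relationship[0]))
]
ranking = sorted(range(len(scores)), key=lambda j: -scores[j])
print("TC scores:", scores, "ranking:", ranking)
```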
9.
A goal of this study is to develop a Composite Knowledge Manipulation Tool (CKMT). Some traditional medical activities rely heavily on the oral transfer of knowledge, with the risk of losing important knowledge. Moreover, the activities differ according to regions, traditions, experts' experiences, and so on. Therefore, it is necessary to develop an integrated and consistent knowledge manipulation tool. With such a tool, it becomes possible to extract tacit knowledge consistently, transform different types of knowledge into a composite knowledge base (KB), integrate disseminated and complex knowledge, and complement missing knowledge. For these reasons, I have developed the CKMT, called K-Expert, which has four advanced functionalities. Firstly, it can extract/import logical rules from data mining (DM) with minimal effort; I expect this function to complement the oral transfer of traditional knowledge. Secondly, it transforms the various types of logical rules into database (DB) tables after syntax checking and/or transformation, so that knowledge managers can refine, evaluate, and manage the huge composite KB consistently with the support of a DB management system (DBMS). Thirdly, it visualizes the transformed knowledge in the form of a decision tree (DT), allowing knowledge workers to evaluate the completeness of the KB and fill in missing knowledge. Finally, it provides knowledge users with an SQL-based backward-chaining function. This can reduce inference time effectively, since it relies on SQL querying and searching rather than the sentence-by-sentence translation used in traditional inference systems, and it gives young researchers and their colleagues in the fields of knowledge management (KM) and expert systems (ES) more opportunities to follow up on and validate their knowledge. Overall, I expect the approach to mitigate knowledge loss and reduce the burden of knowledge transformation and complementation.
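To illustrate the fourth functionality, the sketch below stores rules as rows in an SQLite table and answers a goal by backward chaining through SQL lookups; the schema, the medical rules, and the convention that all premises of a conclusion form one conjunction are assumptions for illustration, not K-Expert's actual design.

```python
import sqlite3

# Rules kept as rows in a DB table; a goal is proven by backward chaining,
# fetching premises with SQL queries. Schema and rules are hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE rules (conclusion TEXT, premise TEXT)")
db.executemany("INSERT INTO rules VALUES (?, ?)", [
    ("prescribe_herb_A", "symptom_fever"),
    ("prescribe_herb_A", "symptom_cough"),
    ("symptom_fever", "temp_high"),
])

def backward_chain(goal, facts):
    """Prove `goal` from `facts`; all stored premises of a conclusion are ANDed."""
    if goal in facts:
        return True
    rows = db.execute("SELECT premise FROM rules WHERE conclusion = ?", (goal,)).fetchall()
    premises = [r[0] for r in rows]
    return bool(premises) and all(backward_chain(p, facts) for p in premises)

print(backward_chain("prescribe_herb_A", {"temp_high", "symptom_cough"}))   # True
```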
10.
Recommender systems apply data mining and machine learning techniques to filter unseen information and can predict whether a user would like a given item. This paper focuses on the gray-sheep user problem, which is responsible for the increased error rate in collaborative filtering based recommender systems. This paper makes the following contributions: we show that (1) the presence of gray-sheep users can affect the performance – accuracy and coverage – of collaborative filtering based algorithms, depending on the data sparsity and distribution; (2) gray-sheep users can be identified using clustering algorithms in an offline fashion, where the similarity threshold that isolates these users from the rest of the community can be found empirically, and we propose several improved centroid selection approaches and distance measures for the K-means clustering algorithm; (3) the content-based profiles of gray-sheep users can be used to make accurate recommendations, and we offer a hybrid recommendation algorithm to make reliable recommendations for gray-sheep users. To the best of our knowledge, this is the first attempt to propose a formal solution to the gray-sheep user problem. Through extensive experiments on two different datasets (MovieLens and the community of movie fans on the FilmTrust website), we show that the proposed approach reduces the recommendation error rate for gray-sheep users while maintaining reasonable computational performance.
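A minimal sketch of the offline identification idea: cluster user rating vectors with K-means and flag users whose distance to their nearest centroid exceeds an empirically chosen threshold as gray-sheep candidates. The synthetic ratings, cluster count, and percentile threshold are illustrative assumptions, not the paper's improved centroid selection or distance measures.

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster user rating vectors and flag users far from their centroid as
# gray-sheep candidates. Ratings, cluster count, and threshold are illustrative.
rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=(100, 20)).astype(float)   # 100 users x 20 items

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(ratings)
dists = np.linalg.norm(ratings - km.cluster_centers_[km.labels_], axis=1)
threshold = np.percentile(dists, 90)                          # empirical cut-off
gray_sheep = np.where(dists > threshold)[0]
print("gray-sheep candidates:", gray_sheep)
```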